
KEP-24: Promote AppArmor to GA #3298

Closed
wants to merge 29 commits

Conversation

mccormickt

KEP-24: Promote AppArmor to GA

New, updated KEP outlining the requirements for AppArmor to move to GA.

Supersedes #1444

saschagrunert and others added 4 commits May 19, 2021 09:30
Signed-off-by: Sascha Grunert <sgrunert@suse.com>
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels May 7, 2022
@k8s-ci-robot
Contributor

Welcome @jan0ski!

It looks like this is your first PR to kubernetes/enhancements 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/enhancements has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label May 7, 2022
@k8s-ci-robot
Contributor

Hi @jan0ski. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label May 7, 2022
@k8s-ci-robot k8s-ci-robot added the kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory label May 7, 2022
@k8s-ci-robot k8s-ci-robot added the sig/node Categorizes an issue or PR as relevant to SIG Node. label May 7, 2022
Member

@kikisdeliveryservice left a comment

A couple of general comments: since this is based on a KEP draft from Sascha from a year ago, some changes have occurred in the KEP template. I've marked the two sections that definitely need to pull in the new template (PRR and Test Plan), but I suggest you go through and ensure you're using the current version for all sections :)

keps/sig-node/24-apparmor-ga/README.md (5 review comments; outdated, resolved)
@mccormickt mccormickt marked this pull request as ready for review May 17, 2022 02:44
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 17, 2022
@dchen1107
Member

/assign @tallclair

@sftim
Contributor

sftim commented Jul 17, 2022

Before we graduate this, can we check whether https://kubernetes.io/docs/tutorials/security/apparmor/ is current? That page talks a lot about requiring Kubernetes v1.4 or later, so my guess is that it does need an update.

Contributor

@sftim left a comment

The path keps/sig-node/24-apparmor-ga looks wrong. We usually don't put -ga suffixes on short names for KEPs; the intention to stabilize is implicit.

@dchen1107 dchen1107 added this to the v1.26 milestone Jul 28, 2022
@mccormickt
Author

Looks like there are still some PRR questions open.
...
If you have time to address the open debuggability concerns that works too.

@tallclair can you point these concerns out for me? I might be missing context, and had assumed PRR questions were addressed already. Happy to try and answer whatever I can.

@deads2k
Contributor

deads2k commented Feb 8, 2023

Back in October the PRR was approved: #3298 (comment)

There was a new question added to scalability: https://github.com/kubernetes/enhancements/blob/master/keps/NNNN-kep-template/README.md#scalability, but that should be easy to update.

##### e2e tests

AppArmor already has [e2e tests](https://github.com/kubernetes/kubernetes/blob/2f6c4f5eab85d3f15cd80d21f4a0c353a8ceb10b/test/e2e_node/apparmor_test.go),
but the tests are guarded by the `[Feature:AppArmor]` tag and not run in the
Member

The AppArmor fields on a pod are immutable, which also applies to the
[annotation](https://github.com/kubernetes/kubernetes/blob/b46612a74224b0871a97dae819f5fb3a1763d0b9/pkg/apis/core/validation/validation.go#L177-L182).
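For readers unfamiliar with the annotation under discussion, here is a minimal sketch of the legacy (beta) annotation form, which embeds the target container's name in the annotation key (pod and container names here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-demo
  annotations:
    # Legacy beta annotation; the key suffix names the target container.
    # Valid values: "runtime/default", "localhost/<profile>", "unconfined".
    container.apparmor.security.beta.kubernetes.io/hello: runtime/default
spec:
  containers:
  - name: hello
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```

Because the profile (field or annotation) is immutable once the pod is created, changing it requires recreating the pod.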

When an [Ephemeral Container](20190212-ephemeral-containers.md) is added, it
Member

ephemeral containers are GA now: #277

The tests section doesn't mention e2e tests validating ephemeral containers behavior. Should this be added?

@SergeyKanzhelev
Member

All the feedback left on this KEP can be addressed by adding action items to the Graduation Criteria for GA. There are no absolutely blocking comments, mostly work items.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: deads2k, jan0ski, tallclair
Once this PR has been reviewed and has the lgtm label, please ask for approval from derekwaynecarr. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

available to Localhost profiles. This prevents profiles meant for other system
daemons from being used by Kubernetes and will be configurable by the kubelet.

###### Updating AppArmor profiles
Member

Would be nice to cover this behavior as well: kubernetes/kubernetes#116003

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 24, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 23, 2023
@pjbgf
Member

pjbgf commented Jul 4, 2023

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 4, 2023
@tallclair tallclair added the triage/accepted Indicates an issue or PR is ready to be actively worked on. label Aug 21, 2023
@tallclair tallclair modified the milestones: v1.27, v1.30 Jan 12, 2024
@tallclair
Member

@mccormickt I'd like to drive this forward in v1.30. Are you still open to working on it, or should I take over this PR?

| 2) Using custom or `runtime/default` profile that restricts actions a container is trying to make. | Pod created | The outcome is workload and AppArmor dependent. In this scenario containers may 1) fail to start, 2) misbehave or 3) log violations. |
| 3) Using custom profile that does not exist on the node. | Pod created | Containers fail to start. Retry respecting RestartPolicy and back-off delay. |
| 4) Using an unsupported runtime profile (e.g. `runtime/default-audit`). | Pod **not** created. | N/A |
| 5) Using localhost profile with invalid name | Pod **not** created. | N/A |
Member

There is not currently any validation of the localhost profile name. We should add field validation, but need to handle annotation validation carefully.

object, events, or a warning as described in [KEP
#1693](/keps/sig-api-machinery/1693-warnings).

#### PodSecurityPolicy Support
Member

Needs to be updated for PodSecurity admission

| -------------------------------------------------------------------------------------------------- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) Using localhost or `runtime/default` profile when container runtime does not support AppArmor. | Pod created | The outcome is container runtime dependent. In this scenario containers may 1) fail to start or 2) run normally without having its policies enforced. |
| 2) Using custom or `runtime/default` profile that restricts actions a container is trying to make. | Pod created | The outcome is workload and AppArmor dependent. In this scenario containers may 1) fail to start, 2) misbehave or 3) log violations. |
| 3) Using custom profile that does not exist on the node. | Pod created | Containers fail to start. Retry respecting RestartPolicy and back-off delay. |
Member

Suggested change
| 3) Using custom profile that does not exist on the node. | Pod created | Containers fail to start. Retry respecting RestartPolicy and back-off delay. |
| 3) Using localhost profile that does not exist on the node. | Pod created | Containers fail to start. Retry respecting RestartPolicy and back-off delay. |

| -------------------------------------------------------------------------------------------------- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1) Using localhost or `runtime/default` profile when container runtime does not support AppArmor. | Pod created | The outcome is container runtime dependent. In this scenario containers may 1) fail to start or 2) run normally without having its policies enforced. |
| 2) Using custom or `runtime/default` profile that restricts actions a container is trying to make. | Pod created | The outcome is workload and AppArmor dependent. In this scenario containers may 1) fail to start, 2) misbehave or 3) log violations. |
| 3) Using custom profile that does not exist on the node. | Pod created | Containers fail to start. Retry respecting RestartPolicy and back-off delay. |
Member

Is this outcome managed by the Kubelet or the container runtime?

Member

How is the error surfaced to the user?

Comment on lines +310 to +312
If a Pod with a container specifies an AppArmor profile by field/annotation,
then the container will only apply the Pod level field/annotation if none
are set on the container level.
Member

Suggested change
If a Pod with a container specifies an AppArmor profile by field/annotation,
then the container will only apply the Pod level field/annotation if none
are set on the container level.
Container-level AppArmor profiles override anything set at the pod-level.
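To illustrate the suggested precedence rule, here is a sketch of a pod-level default with a container-level override. The `appArmorProfile` field shape follows what the successor KEP (#4417) eventually merged; the profile name `my-profile` is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: precedence-demo
spec:
  securityContext:
    # Pod-level default, applied to containers that set nothing themselves.
    appArmorProfile:
      type: RuntimeDefault
  containers:
  - name: inherits-pod-default     # gets RuntimeDefault from the pod level
    image: busybox
  - name: overrides-pod-default    # container-level setting wins
    image: busybox
    securityContext:
      appArmorProfile:
        type: Localhost
        localhostProfile: my-profile   # illustrative profile name
```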

Comment on lines +314 to +318
To raise awareness of annotation usage (in case of old automation), a warning
mechanism will be used to highlight that support will be dropped in v1.30.
The mechanisms being considered are audit annotations, annotations on the
object, events, or a warning as described in [KEP
#1693](/keps/sig-api-machinery/1693-warnings).
Member

Needs to be updated.

Comment on lines +345 to +347
To raise awareness of existing controllers using the AppArmor annotations that
need to be migrated, a warning mechanism will be used to highlight that support
will be dropped in v1.30.
Member

Needs update.

#### PodTemplates

PodTemplates (e.g. ReplicaSets, Deployments, StatefulSets, etc.) will be
ignored. The field/annotation resolution will happen on template instantiation.
Member

What are the implications for PodSecurity admission?

Comment on lines +373 to +381
Since
[we support](https://kubernetes.io/docs/setup/release/version-skew-policy/) up
to 2 minor releases of version skew between the master and node, annotations
must continue to be supported and backfilled for at least 2 versions past the
initial implementation. If this feature is implemented in v1.27, I propose v1.30 as a
target for removal of the old behavior. Specifically, annotation support will be removed
in the kubelet after this period, and fields will no longer be copied to annotations for older kubelet
versions. However, annotations submitted to the API server will continue to be copied to fields at the
kubelet indefinitely, as was done with Seccomp.
Member

Timeline needs update

@mccormickt
Author

mccormickt commented Jan 14, 2024

@mccormickt I'd like to drive this forward in v1.30. Are you still open to working on it, or should I take over this PR?

Thanks @tallclair, please feel free to take this over. Apologies for the lack of recent communication; life has gotten in the way of progress here as of late. Let me know if I can help clarify any gaps.

@tallclair
Member

Closing in favor of #4417.

@tallclair tallclair closed this Jan 19, 2024
Labels
  • cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
  • kind/kep (Categorizes KEP tracking issues and PRs modifying the KEP directory.)
  • ok-to-test (Indicates a non-member PR verified by an org member that is safe to test.)
  • sig/node (Categorizes an issue or PR as relevant to SIG Node.)
  • size/XL (Denotes a PR that changes 500-999 lines, ignoring generated files.)
  • triage/accepted (Indicates an issue or PR is ready to be actively worked on.)